Updated version #82
Conversation
…w amount of noise added
… homogenized plot style
ReviewNB conversation:

Vaibhavdixit02 commented on 2020-08-09T05:03:07Z:
It would be better to delete this cell.

arnaudmgh commented on 2020-08-09T11:43:02Z:
I'll do that last, because I am not sure how well the notebook will work if I don't have it at all.

Vaibhavdixit02 commented on 2020-08-09T11:53:23Z:
That's okay. Just delete the cell after running it, and then save and push; I think that should be fine? Though I'm not very sure, notebooks get broken very fast. Do you think you could run the conversion to markdown as well? https://github.com/TuringLang/TuringTutorials/blob/master/render-example.bash cc: @cpfiffer

arnaudmgh commented on 2020-08-09T15:11:35Z:
OK, I've done that on my fork - hopefully it shows on the pull request?
ReviewNB conversation:

Vaibhavdixit02 commented on 2020-08-09T05:03:08Z:
Curious, why turn off the progress bars?

arnaudmgh commented on 2020-08-09T11:42:17Z:
Oh, this was there before I edited. I think it's because it creates one line of output every time the progress bar changes, making big outputs.

Vaibhavdixit02 commented on 2020-08-09T15:01:25Z:
Ah yes, okay.
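As a side note, a minimal sketch of how the progress output can be silenced globally (not part of the PR, and the exact call depends on the Turing.jl version; older releases used `Turing.turnprogress(false)` instead):

```julia
using Turing

# Disable Turing's progress logging for the whole session, so a sampling cell
# does not emit one output line per progress update when the notebook is
# executed non-interactively.
Turing.setprogress!(false)
```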
@ChrisRackauckas any comments?
Note that the first time I run the last version with …
ReviewNB conversation:

devmotion commented on 2020-08-09T13:36:37Z:
Maybe just add …

arnaudmgh commented on 2020-08-09T16:19:30Z:
Will do.
ReviewNB conversation:

devmotion commented on 2020-08-09T13:36:38Z:
Maybe add a …

arnaudmgh commented on 2020-08-09T16:43:22Z:
OK, done.
ReviewNB conversation:

devmotion commented on 2020-08-09T13:36:39Z:
IMO it would be better and more instructive to retrieve the parameters by names. Users shouldn't work with …

arnaudmgh commented on 2020-08-09T16:37:37Z:
What syntax do you propose?

devmotion commented on 2020-08-09T20:13:07Z:
Similar to … (using the correct unicode characters; strings should work as well). Then you get a three-dimensional array of dimensions samples x parameters x chains that you can loop over.

arnaudmgh commented on 2020-08-10T03:22:43Z:
A problem I had with this notation is that `Array(chain[[:δ, :γ, :α, :β]], append_chains=false) == Array(chain[[:α, :β, :γ, :δ]], append_chains=false)` is true! It seems to return the variables in alphabetical order no matter the order we specify. So if the parameters in the model function are not assigned in alphabetical order, the results will be wrong (which is the case in the current tutorial, by the way). This is not the case when using integers (i.e. integer indexing works as expected), so that's why I used them.

devmotion commented on 2020-08-10T06:14:07Z:
I see, there's apparently a bug in AxisArrays (https://github.com/JuliaArrays/AxisArrays.jl/issues/182) which causes this behaviour. I still think that it would be better to use strings or symbols, but then we should also add a comment (just in the code) about this behaviour. IMO it's impossible to understand the code if using indices, and it will also break (silently) if you use a different inference method or Turing changes the internal variables or statistics for the NUTS sampler. BTW I guess we should also just write …

arnaudmgh commented on 2020-08-13T05:03:02Z:
Thanks for the comments David, I incorporated these modifications in …
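For illustration, here is a small, self-contained sketch of the symbol-based indexing being discussed (not taken from the tutorial; the toy model, prior values, and the exact shape returned by `Array` are assumptions and may vary across MCMCChains versions):

```julia
using Turing

# Toy model standing in for the tutorial's Lotka-Volterra model.
@model function toy(x)
    α ~ Normal(1.0, 0.5)
    β ~ Normal(1.0, 0.5)
    x ~ Normal(α, abs(β) + 0.1)
end

chain = sample(toy(1.2), NUTS(0.65), 500)

# Subset the chain by parameter name instead of by column index ...
subset = chain[[:α, :β]]

# ... but, because of the AxisArrays issue linked above, check the actual
# column order instead of trusting the order that was requested.
@show names(subset)

# Draws as a plain array (samples × parameters when chains are appended).
θ = Array(subset)
```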
ReviewNB conversation:

devmotion commented on 2020-08-09T13:36:39Z:
IMO the code is a bit difficult to read, in particular the definition of the priors. Maybe split it over multiple lines?
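For instance, one prior per line reads far more easily than packing everything into a single expression. This is a sketch only; the distributions and truncation bounds below are placeholders, not necessarily the tutorial's actual priors:

```julia
using Turing

@model function fitlv(data, prob)
    # Observation noise and ODE parameters, one prior per line.
    σ ~ InverseGamma(2, 3)
    α ~ truncated(Normal(1.5, 0.5), 0.5, 2.5)
    β ~ truncated(Normal(1.2, 0.5), 0.0, 2.0)
    γ ~ truncated(Normal(3.0, 0.5), 1.0, 4.0)
    δ ~ truncated(Normal(1.0, 0.5), 0.0, 2.0)
    # ... likelihood terms omitted in this sketch ...
end
```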
ReviewNB conversation:

devmotion commented on 2020-08-09T13:36:40Z:
Again maybe use …

ReviewNB conversation:

devmotion commented on 2020-08-09T13:36:41Z:
Here as well.
ReviewNB conversation:

devmotion commented on 2020-08-09T13:36:41Z:
It's a bit strange to use …

Vaibhavdixit02 commented on 2020-08-13T10:27:08Z:
Looks like the below definition doesn't work? I suspect the history function is expected to be a vector matching the dependent variable's length?

```julia
function delay_lotka_volterra(du,u,h,p,t)
    x, y = u
    α,β,γ,δ = p
    du[1] = α*h(p,t-1) - β*x*y
    du[2] = -γ*y + δ*x*y
end
```

Vaibhavdixit02 commented on 2020-08-13T10:27:31Z:
This works:

```julia
function delay_lotka_volterra(du,u,h,p,t)
    x, y = u
    α,β,γ,δ = p
    du[1] = α*h(p,t-1)[1] - β*x*y
    du[2] = -γ*y + δ*x*y
end
```

devmotion commented on 2020-08-13T17:05:20Z:
> Looks like the below definition doesn't work? I suspect the history function is expected to be a vector matching the dependent variable's length?

Yes, and hence the version in the last comment shouldn't be used either (the initial history function should return a vector as well). During the numerical integration process the solver will call … Hence one should use

```julia
function delay_lotka_volterra(du, u, h, p, t)
    x, y = u
    α, β, γ, δ = p
    du[1] = α * h(p, t-1)[1] - β * x * y
    du[2] = -γ * y + δ * x * y
    return
end
```

or, to save allocations (as suggested above), `function delay_lotka_volterra(du, u, h, p, t)` …
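To make the fix concrete, here is a self-contained sketch of how the corrected function could be used with a DDE solver (my reconstruction, not the notebook's exact code; the parameter values, time span, and history function are illustrative):

```julia
using DifferentialEquations

# Delayed Lotka-Volterra: the history function `h` returns the full state
# vector, so the delayed prey value is `h(p, t - 1)[1]`.
function delay_lotka_volterra(du, u, h, p, t)
    x, y = u
    α, β, γ, δ = p
    du[1] = α * h(p, t - 1)[1] - β * x * y
    du[2] = -γ * y + δ * x * y
    return nothing
end

p = [2.2, 1.0, 2.0, 0.4]      # illustrative values for α, β, γ, δ
u0 = [1.0, 1.0]
tspan = (0.0, 10.0)
h(p, t) = ones(2)             # constant history returning a state-sized vector

prob = DDEProblem(delay_lotka_volterra, u0, h, tspan, p; constant_lags = [1.0])
sol = solve(prob, MethodOfSteps(Tsit5()); saveat = 0.1)
```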
ReviewNB conversation:

devmotion commented on 2020-08-09T13:36:42Z:
Some comments …
ReviewNB conversation:

devmotion commented on 2020-08-09T13:36:43Z:
This is not correct, it seems you don't use multithreading in the code below.
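For reference, a sketch of how several chains could actually be sampled on multiple threads (assuming a Turing.jl version that supports the `MCMCThreads` interface and a Julia session started with more than one thread; the trivial model is only a placeholder for the tutorial's model):

```julia
using Turing

@model function demo()
    μ ~ Normal(0, 1)
end

# Sample 4 chains in parallel, one per thread, 1000 draws each.
# Requires starting Julia with e.g. `julia --threads 4`.
chains = sample(demo(), NUTS(0.65), MCMCThreads(), 1000, 4)
```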
ReviewNB conversation:

devmotion commented on 2020-08-09T13:36:43Z:
Again it would be better to not use …
ReviewNB conversation:

ChrisRackauckas commented on 2020-08-09T14:48:03Z:
Should we just remove this section?

Vaibhavdixit02 commented on 2020-08-09T15:05:11Z:
Not 100% sure; it might be good to get the benchmarks updated separately and remove it from here.

arnaudmgh commented on 2020-08-09T16:04:30Z:
I kind of agree that it will flow better without this section here; for an intro, it's pretty awesome to see you can just use DiffEq problems and solvers seamlessly in the Turing model. DiffEqBayes is a bit of a distraction from the aim, learning Turing with differential equation models. (Am I correct in the text that what DiffEqBayes adds is that it can interface with STAN.jl and other packages?) Perhaps a quick reference to the package would be enough? In the same vein, should we maybe reorder as …
ReviewNB conversation:

devmotion commented on 2020-08-14T10:39:52Z:
Should probably be removed?

ReviewNB conversation:

devmotion commented on 2020-08-14T10:39:52Z:
Same here, …
ReviewNB conversation:

devmotion commented on 2020-08-14T10:39:53Z:
Maybe be consistent and use the same way for defining the problem as above?

ReviewNB conversation:

devmotion commented on 2020-08-14T10:39:54Z:
Same here, maybe be consistent?

ReviewNB conversation:

devmotion commented on 2020-08-14T10:39:55Z:
Same here?
Can we maybe change the logging level for this notebook to avoid all the warnings from AdvancedHMC while tuning the step size?
I couldn't get it to work from the …

```bash
env/bin/jupyter-nbconvert "$filename" --to notebook --inplace --execute --ExecutePreprocessor.timeout=-1 --ExecutePreprocessor.startup_timeout=120
```
Did you run the whole script? I guess there could be issues with …
I ran the required parts; I had to modify the script to suit my system, so it isn't identical. The above line had a hardcoded kernel name, which I removed to allow me to run it. I am not sure of the interaction between the Julia environment and the Jupyter notebook kernel, and whether that can be ensured while running this. If you have any suggestions, I can include them? Maybe we should ask Cameron?
Maybe the problem is that you use a custom kernel that doesn't include the flag …
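For context, this is roughly how a project-aware Jupyter kernel is registered with IJulia; the kernel name and the `--project` flag below are my assumptions, not necessarily the flag referred to above:

```julia
using IJulia

# Register a kernel that starts Julia with the current project environment,
# so the notebook picks up the tutorial's Project.toml and Manifest.toml.
installkernel("Julia (TuringTutorials)", "--project=@.")
```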
Yeah, these should ultimately be run by CI, but I haven't gotten around to setting it up yet. For the moment, I will rerun the markdown file on my machine. I think the only remaining things to address here are @devmotion's comments above.
Thanks for addressing the review @arnaudmgh. cc: @cpfiffer
ReviewNB conversation:

cpfiffer commented on 2020-08-16T16:48:52Z:
This should be … Additionally, it would be nice if we changed the logging level here to hide all the AdvancedHMC warnings. Same for the code under the "Direct Handling of Bayesian Estimation with Turing" header. You can do this by defining a new logger:

```julia
using Logging
…
```

arnaudmgh commented on 2020-08-20T04:38:03Z:
Agreed - just to be sure, do we have to put that on every call to …?

devmotion commented on 2020-08-20T06:54:04Z:
You could also use …

cpfiffer commented on 2020-08-20T13:54:47Z:
I like that better. Just stick this in the setup block at the top of the notebook:

```julia
using Logging
…
```
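Since the snippets above are truncated, here is a sketch of what such a logger could look like using only the standard Logging API (the Error threshold and the per-call variant are my assumptions, not necessarily what was proposed):

```julia
using Logging

# Install a console logger that only lets Error (and above) through, so the
# AdvancedHMC step-size adaptation warnings are hidden for the whole notebook.
global_logger(ConsoleLogger(stderr, Logging.Error))

# Alternatively, silence warnings around a single call only:
# with_logger(NullLogger()) do
#     sample(model, NUTS(0.65), 1000)
# end
```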
Bump @arnaudmgh?
ReviewNB conversation:

devmotion commented on 2020-08-21T15:19:51Z:
I still think this cell should be removed. The standard IJulia kernel activates the project in the current directory automatically when opening a notebook, it's not part of any other notebooks, and export seems to work fine for Cameron.
ReviewNB conversation:

devmotion commented on 2020-08-21T15:19:52Z:
Actually, it is probably a lot simpler (see https://github.com/JuliaLang/julia/pull/33807) and you can just write

```julia
# Do not show AdvancedHMC warnings.
using Logging
Logging.disable_logging(Logging.Warn)
```
Okay, cool. Thanks! Anyone have anything else they want to add to this one?
LGTM
@cpfiffer just FYI, the markdown conversion would have to be done again; the markdown in this PR is now outdated and wasn't updated.
On it, thanks!
In collaboration with @Vaibhavdixit02.